We present a deep neural network-based approach to image quality assessment (IQA). The network is trained end-to-end and comprises ten convolutional layers and five pooling layers for feature extraction, and two fully connected layers for regression, which makes it significantly deeper than related IQA models. Unique features of the proposed architecture are that 1) with slight adaptations it can be used in a no-reference (NR) as well as in a full-reference (FR) IQA setting, and 2) it allows for joint learning of local quality and local weights, i.e., the relative importance of local quality to the global quality estimate, in a unified framework. Our approach is purely data-driven and does not rely on hand-crafted features or other prior domain knowledge about the human visual system or image statistics. We evaluate the proposed approach on the LIVE, CSIQ, and TID2013 databases as well as the LIVE In the Wild Image Quality Challenge database and show superior performance to state-of-the-art NR and FR IQA methods. Finally, cross-database evaluation shows a high ability to generalize between different databases, indicating high robustness of the learned features.
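The joint pooling of local (patchwise) quality scores with learned local weights can be sketched as follows. This is a minimal NumPy illustration of the weighted-average idea only; the softplus activation and the stabilizing epsilon are assumptions for illustration, not the exact formulation used in the network.

```python
import numpy as np

def pooled_quality(local_q, local_w_logits):
    """Weighted-average pooling of patchwise quality estimates.

    local_q        : per-patch quality estimates, shape (n_patches,)
    local_w_logits : per-patch weight logits from the weight branch

    Weights are made strictly positive via a softplus plus a small
    epsilon (an assumed choice here), then used to form a normalized
    weighted average, so patches the network deems more important
    contribute more to the global quality estimate.
    """
    w = np.log1p(np.exp(local_w_logits)) + 1e-6  # positive weights
    return float(np.sum(w * local_q) / np.sum(w))

# Equal logits give equal weights, so pooling reduces to the mean.
q = pooled_quality(np.array([1.0, 3.0]), np.array([0.0, 0.0]))
```

With equal weight logits the pooled score is simply the mean of the local scores; unequal logits shift the global estimate toward the more heavily weighted patches, which is what lets local importance be learned jointly with local quality.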